
Growth of AI requires establishing responsible AI policies

As AI continues to grow and evolve, organizations must start refining their responsible and ethical use of AI
 
Jordan Smith
US Reporter, HCLTech
6 minute read

Artificial intelligence usage has continued to grow, and companies are starting to carefully define policies and commitments that establish safe, fair, inclusive and accountable practices for the development and deployment of AI technology.

The policies these organizations are putting together, however, tend to be broad and general, applying whenever AI systems and AI-generated outputs are used, whether those systems are developed internally or externally.

According to Leo Lin, Senior Practice Director, Digital Consulting - Digital Strategy & Transformation at HCLTech, as AI capabilities evolve, responsible corporate AI policies need to keep pace with what the technology can do and should do. AI policies should also connect to corporate values on transparency. To succeed, organizations must overcome several challenges when implementing responsible AI policies.

Challenges when implementing responsible AI policies

One of the key challenges in implementing responsible AI policies is clarity in the organizational structure about who owns the policies and who should write them. The answer also depends on the company, its main product line and how it plans to use AI.

Another challenge is ensuring there is appropriate expertise to interpret the output, as uninformed or incorrect use of AI output could cause harm. Additionally, as people’s roles change, they’ll need training to be able to play their in-the-loop and over-the-loop roles, which could require different organizational structures to be set up to manage and leverage the AI technology.

AI governance is a concern for organizations because it raises various questions that need to be addressed. Organizations need to determine what recommendations or decisions AI should be allowed to make, along with what decisions are off limits. Companies will need to figure out whether they should block access to GenAI for certain enterprise functions and what level of human oversight is required for each component of the AI system.
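As a rough illustration, decisions like these can be captured in a machine-readable policy table. The Python sketch below uses entirely hypothetical function names, policy fields and oversight labels; it is not an HCLTech framework, just one way to encode which functions may use GenAI and which decisions stay off limits.

```python
# Hypothetical sketch of a per-function GenAI access policy. The function
# names, fields and oversight levels are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class GenAIPolicy:
    genai_allowed: bool      # may this function use GenAI at all?
    oversight: str           # "human-in-the-loop", "human-over-the-loop" or "none"
    blocked_decisions: tuple # decisions AI must never make autonomously

POLICIES = {
    "customer_service": GenAIPolicy(True, "human-over-the-loop", ("refunds",)),
    "hr_hiring":        GenAIPolicy(True, "human-in-the-loop", ("hire", "terminate")),
    "legal":            GenAIPolicy(False, "none", ()),  # GenAI access blocked entirely
}

def may_decide(function: str, decision: str) -> bool:
    """True only if the function may use GenAI and the decision is not off limits."""
    policy = POLICIES.get(function)
    return bool(policy and policy.genai_allowed and decision not in policy.blocked_decisions)

print(may_decide("hr_hiring", "hire"))           # False: hiring decisions stay with humans
print(may_decide("customer_service", "triage"))  # True, with over-the-loop review
```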

Taking a risk-based approach to AI oversight

A risk-based approach to AI oversight should assess the potential impact of AI technology in the context of specific use cases and applications. It should also weigh the benefits of a proposed AI application against the risk of not proceeding with its development or deployment.

When using a risk-based approach to AI oversight, organizations should quantify or qualify the severity, reversibility and likelihood of harm occurring.

“Acknowledge that risk may be subjective,” Lin says of risk-based approaches. “There are differences in risk appetite and some risks are unknowable. Decisions should be well-reasoned, not based on ‘intuition’ or made on a whim.”

Further, organizations should decide on and design appropriate levels of human oversight and other mitigations. Awareness that applicable laws may require specific levels of human oversight and intervention is critical to a risk-based approach; where such laws apply, project teams must comply with them.

“Not all risks may be mitigable and there may be residual risks,” Lin says. “Organizations must decide if the risks are acceptable.”
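As a rough illustration of how the severity, reversibility and likelihood factors above might be combined, the Python sketch below scores a use case and maps the score to a decision. The 1-5 scales, multiplicative scoring and thresholds are assumptions made for illustration, not a prescribed methodology.

```python
# A minimal sketch combining the three factors named above: severity,
# reversibility (expressed here as irreversibility) and likelihood of harm.
# Scales, multiplication and thresholds are illustrative assumptions.

def risk_score(severity: int, irreversibility: int, likelihood: int) -> int:
    """Each factor on a 1-5 scale; higher means riskier. Maximum score is 125."""
    return severity * irreversibility * likelihood

def classify(score: int) -> str:
    """Map a score to a documented, well-reasoned decision rather than intuition."""
    if score >= 60:
        return "high: redesign or do not deploy"
    if score >= 20:
        return "medium: deploy with human-in-the-loop mitigations"
    return "low: acceptable residual risk, monitor in production"

# Example: severe but partly reversible harm that is unlikely to occur
print(classify(risk_score(severity=4, irreversibility=3, likelihood=2)))  # medium
```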

AI’s susceptibility to bias

Lin also suggests applying the same vetting to AI business intelligence tools that we apply to our doctors, lawyers, consultants or accountants. We want to know their background, credentials and experience. Organizations should bring that same mindset to implementing responsible AI.

“Before we can leverage the power of AI, we need to understand how it works so that we can trust its outputs,” Lin says. “What are the limitations and strengths of this specific tool? What pool of source data is it drawing its conclusions from? And does the system have any unintended built-in bias?”

While having a strong governance process is good practice, it’s also important for everyone to understand AI’s limitations. Any AI solution is constrained by its design and the source data it’s given.
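One widely used check for the kind of unintended bias Lin describes is the disparate impact ratio, the “four-fifths rule” from US employment practice. The Python sketch below applies it to fabricated approval data; the data and threshold interpretation are illustrative only.

```python
# Disparate impact ratio over fabricated model decisions for two groups.
# A ratio below 0.8 is a common red flag, not proof of discrimination.

def selection_rate(outcomes: list[int]) -> float:
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a: list[int], group_b: list[int]) -> float:
    """Ratio of the lower selection rate to the higher."""
    rates = sorted([selection_rate(group_a), selection_rate(group_b)])
    return rates[0] / rates[1]

# Model decisions (1 = approved) for two demographic groups
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

print(f"disparate impact ratio: {disparate_impact(group_a, group_b):.2f}")  # 0.50
```

A ratio well below 0.8, as in this fabricated example, does not prove discrimination, but it signals that the source data or model design deserves a closer look.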


How HCLTech helps clients navigate the AI landscape

HCLTech is dedicated to helping clients navigate the evolving AI landscape to ensure they’re on the best path for their organization. HCLTech helps clients in five key ways:

  • Coaching and guiding clients on the development of policies and procedures for ethical and responsible use of AI, including a risk-based approach to using AI.
  • Advising on IT strategy and selecting AI technology such as machine learning, statistical modeling and natural language processing.
  • Supporting Organizational Change Management for implementing new policies, procedures and technologies, as well as designing a governance organization and structure.
  • Assisting with Project Management and technical implementation of AI technologies.
  • Coaching and guiding clients on XD/EX/CX as they implement and integrate AI into their internal- and external-facing functions: Benefits, Helpdesk, Customer Service, etc.

HCLTech also has a host of AI solutions that help organizations identify AI opportunities, develop strategies and create roadmaps for implementing AI solutions. The approach is to aggregate HCLTech’s AI and GenAI offerings and capabilities into a comprehensive, fully integrated consultancy and delivery platform that is designed to respond to an organization’s industry-specific needs and to maximize value.
